Fast and Accurate Training of Multilayer Perceptrons Using an Extended Kalman Filter

Author

  • Friedrich Lange
Abstract

The training algorithm EKFNet uses an Extended Kalman Filter for supervised learning of feed-forward neural nets. The main difference with respect to ordinary backpropagation methods is the calculation of an N x N covariance matrix that captures the interdependence of the N weights to be optimised. Computing time therefore increases quadratically with the number of weights, restricting fast learning to problems with up to about 200 weights. In contrast to other optimization methods, training is performed as learning-by-pattern to allow fast convergence in the case of long training data sets. EKFNet is demonstrated for the task of decoupling an industrial robot as well as for a problem from the PROBEN1 benchmark set. In most cases EKFNet learns faster and more accurately than any other method tested.
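The abstract only sketches the mechanism, so the following minimal Python sketch illustrates how an Extended Kalman Filter can be used for learning-by-pattern training of a small multilayer perceptron. The network layout, the function names, the noise variance r, and the finite-difference Jacobian are illustrative assumptions, not the EKFNet implementation itself.

```python
import numpy as np

def mlp_forward(w, x, n_hidden):
    """Tiny one-hidden-layer MLP with tanh hidden units and a linear output.
    w packs [W1 (n_hidden x d), b1 (n_hidden), w2 (n_hidden), b2] into one vector."""
    d = x.size
    i = 0
    W1 = w[i:i + n_hidden * d].reshape(n_hidden, d); i += n_hidden * d
    b1 = w[i:i + n_hidden]; i += n_hidden
    w2 = w[i:i + n_hidden]; i += n_hidden
    b2 = w[i]
    h = np.tanh(W1 @ x + b1)
    return h @ w2 + b2

def numerical_jacobian(w, x, n_hidden, eps=1e-6):
    """Derivative of the network output w.r.t. all N weights
    (finite differences for clarity; an analytic gradient would be used in practice)."""
    g = np.zeros_like(w)
    for k in range(w.size):
        wp = w.copy(); wp[k] += eps
        wm = w.copy(); wm[k] -= eps
        g[k] = (mlp_forward(wp, x, n_hidden) - mlp_forward(wm, x, n_hidden)) / (2 * eps)
    return g

def ekf_step(w, P, x, y, n_hidden, r=0.1):
    """One learning-by-pattern EKF update: P is the N x N weight covariance,
    r the assumed output-noise variance."""
    h = numerical_jacobian(w, x, n_hidden)   # linearised measurement model
    e = y - mlp_forward(w, x, n_hidden)      # innovation (prediction error)
    s = h @ P @ h + r                        # innovation variance (scalar output)
    k = (P @ h) / s                          # Kalman gain, shape (N,)
    w = w + k * e                            # weight update
    P = P - np.outer(k, h @ P)               # covariance update, touches all N x N entries
    return w, P

# Usage: fit y = sin(x) on random samples, one pattern at a time.
rng = np.random.default_rng(0)
n_hidden, d = 8, 1
N = n_hidden * d + n_hidden + n_hidden + 1
w = rng.normal(scale=0.5, size=N)
P = np.eye(N) * 100.0                        # large initial weight uncertainty
for _ in range(2000):
    x = rng.uniform(-np.pi, np.pi, size=1)
    w, P = ekf_step(w, P, x, np.sin(x[0]), n_hidden)
```

The quadratic growth in computing time mentioned in the abstract is visible in the covariance update: the rank-one correction of the N x N matrix P touches every entry once per training pattern.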


Similar articles

Improve an Efficiency of Feedforward Multilayer Perceptrons by Serial Training

The Feedforward Multilayer Perceptron is a widely used Artificial Neural Network model, trained with the backpropagation algorithm on real-world data. There are two common ways to construct a Feedforward Multilayer Perceptron network: either taking a large network and then pruning away the irrelevant nodes, or starting from a small network and then adding new relevant nodes. An Arti...


An Active Learning Algorithm Based on Existing Training Data

A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about the problem being learnt, sufficiently accurate classification can be attained with fewer training data. Recently, such active learning has been receiving increasing interest. In this paper, we ...


Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks

In this paper we conduct a comparative study between hybrid methods to optimize multilayer perceptrons: a model that optimizes the architecture and initial weights of multilayer perceptrons; a parallel approach to optimize the architecture and initial weights of multilayer perceptrons; a method that searches for the parameters of the training algorithm, and an approach for cooperative co-evolut...


Fast training of multilayer perceptrons

Training a multilayer perceptron with an error backpropagation algorithm is slow and uncertain. This paper describes a new approach that is much faster and more certain than error backpropagation. The proposed approach is based on combined iterative and direct solution methods. In this approach, we use an inverse transformation for linearization of the nonlinear output activation functions, direct soluti...
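The direct-solution idea outlined in that abstract can be illustrated with a short sketch: applying the inverse of the output activation (here a logistic sigmoid, whose inverse is the logit) to the targets turns the output layer into a linear least-squares problem that can be solved in one step. The helper name solve_output_weights, the sigmoid choice, and the synthetic data are assumptions made for illustration, not the paper's actual procedure.

```python
import numpy as np

def solve_output_weights(H, T, eps=1e-6):
    """H: hidden-layer activations (n_samples x n_hidden, bias column included),
    T: targets in (0, 1).  Returns output weights W such that sigmoid(H @ W) ~ T."""
    # Linearize by applying the inverse activation (logit) to the targets.
    Z = np.log(np.clip(T, eps, 1 - eps) / np.clip(1 - T, eps, 1 - eps))
    # Direct linear least-squares solution instead of iterative gradient descent.
    W, *_ = np.linalg.lstsq(H, Z, rcond=None)
    return W

# Usage with synthetic hidden activations and sigmoid targets.
rng = np.random.default_rng(1)
H = np.hstack([np.tanh(rng.normal(size=(100, 5))), np.ones((100, 1))])
T = 1.0 / (1.0 + np.exp(-(H @ rng.normal(size=(6, 2)))))
W = solve_output_weights(H, T)
```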


Training Multilayer Perceptrons with the Extended Kalman Algorithm

A large fraction of recent work in artificial neural nets uses multilayer perceptrons trained with the back-propagation algorithm described by Rumelhart et al. This algorithm converges slowly for large or complex problems such as speech recognition, where thousands of iterations may be needed for convergence even with small data sets. In this paper, we show that training multilayer perceptrons...



Journal:

Volume   Issue

Pages  -

Publication year: 1995